Regularized Newton Methods for Convex Minimization Problems with Singular Solutions
Authors
Abstract
This paper studies convergence properties of regularized Newton methods for minimizing a convex function whose Hessian matrix may be singular everywhere. We show that if the objective function is LC² (twice differentiable with a locally Lipschitz Hessian), the methods converge locally quadratically under a local error bound condition, without requiring the solution to be isolated or the Hessian to be nonsingular there. Using a backtracking line search, we globalize an inexact regularized Newton method and show that the unit step size is eventually accepted. Limited numerical experiments are presented that demonstrate the practical advantages of the method.
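To make the scheme concrete, here is a minimal sketch of a regularized Newton iteration with Armijo backtracking in the spirit of the method described above. The Levenberg-Marquardt-style weight mu = c * ||grad f(x)|| and all constants are illustrative assumptions, not the paper's exact parameters; the test function is chosen only because its Hessian is singular everywhere and its minimizers are non-isolated.

```python
# A minimal sketch, assuming a Levenberg-Marquardt-style regularization.
import numpy as np

def regularized_newton(f, grad, hess, x0, c=1.0, beta=0.5, sigma=1e-4,
                       tol=1e-10, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Regularize the possibly singular Hessian so the Newton system
        # (H + mu I) d = -g is positive definite.
        mu = c * gnorm
        d = np.linalg.solve(hess(x) + mu * np.eye(x.size), -g)
        # Backtracking line search; near a solution the unit step t = 1
        # is expected to be accepted, as the abstract notes.
        t = 1.0
        while f(x + t * d) > f(x) + sigma * t * g.dot(d):
            t *= beta
        x = x + t * d
    return x

# Test problem: f(x) = (x1 + x2)^2 / 2 is convex, its Hessian is singular
# everywhere, and its minimizers form the non-isolated line x1 + x2 = 0,
# on which a local error bound dist(x, X*) <= ||grad f(x)|| holds.
f = lambda x: 0.5 * (x[0] + x[1]) ** 2
grad = lambda x: (x[0] + x[1]) * np.ones(2)
hess = lambda x: np.ones((2, 2))
print(regularized_newton(f, grad, hess, [2.0, 1.0]))
```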
Similar papers
Truncated regularized Newton method for convex minimizations
Recently, Li et al. (Comput. Optim. Appl. 26:131–147, 2004) proposed a regularized Newton method for convex minimization problems. The method retains the local quadratic convergence property without requiring nonsingularity of the Hessian. In this paper, we develop a truncated regularized Newton method and show its global convergence. We also establish a local quadratic convergence theorem fo...
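For context, a "truncated" Newton method computes the regularized direction from (H + mu I) d = -g only approximately, typically by conjugate gradients with a residual-based stopping rule. The sketch below uses the classical inexact-Newton forcing condition ||(H + mu I) d + g|| <= eta * ||g||; the tolerance eta and the iteration cap are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

def truncated_direction(matvec, g, eta=0.1, max_iter=50):
    """Approximately solve A d = -g by CG, where matvec(v) returns A v."""
    d = np.zeros_like(g)
    r = -g.astype(float)            # residual of A d = -g at d = 0
    p = r.copy()
    rr = r.dot(r)
    tol = eta * np.linalg.norm(g)
    for _ in range(max_iter):
        if np.sqrt(rr) <= tol:      # truncation: stop at a loose residual
            break
        Ap = matvec(p)
        alpha = rr / p.dot(Ap)
        d += alpha * p
        r -= alpha * Ap
        rr_new = r.dot(r)
        p = r + (rr_new / rr) * p
        rr = rr_new
    return d

# Example with a singular H, regularized by mu = ||g||:
H = np.array([[1.0, 1.0], [1.0, 1.0]])
g = np.array([2.0, -1.0])
mu = np.linalg.norm(g)
print(truncated_direction(lambda v: H @ v + mu * v, g))
```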
A Family of SQA Methods for Non-Smooth Non-Strongly Convex Minimization with Provable Convergence Guarantees
We propose a family of sequential quadratic approximation (SQA) methods, the inexact regularized proximal Newton (IRPN) method, to minimize a sum of smooth and non-smooth convex functions. Our algorithm features strong convergence guarantees even when applied to problems with degenerate solutions, while allowing the inner minimization to be solved inexactly. We prove that IRPN conv...
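As a rough illustration of one SQA step for an l1-regularized problem min f(x) + lam * ||x||_1: form the quadratic model of f at the current point and solve it inexactly by a few proximal-gradient iterations. The fixed step size 1/L and the inner iteration count are assumptions for illustration, not IRPN's actual inner solver or stopping rule.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sqa_step(x, g, H, lam, inner_iters=20):
    # Inner model: q(y) = g.(y - x) + 0.5 (y - x)' H (y - x) + lam*||y||_1,
    # minimized inexactly by proximal-gradient iterations.
    L = max(np.linalg.norm(H, 2), 1e-12)   # step size from model curvature
    y = x.copy()
    for _ in range(inner_iters):
        model_grad = g + H @ (y - x)       # gradient of the smooth part
        y = soft_threshold(y - model_grad / L, lam / L)
    return y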
New Quasi-Newton Optimization Methods for Machine Learning
This thesis develops new quasi-Newton optimization methods that exploit the well-structured functional form of objective functions often encountered in machine learning, while still maintaining the solid foundation of the standard BFGS quasi-Newton method. In particular, our algorithms are tailored to two categories of machine learning problems: (1) regularized risk minimization problems with c...
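For context, the textbook BFGS inverse-Hessian update that such quasi-Newton methods build on is shown below; the function name and calling convention are assumptions for illustration.

```python
import numpy as np

def bfgs_update(B, s, y):
    # Given the step s = x_new - x_old and the gradient change
    # y = grad_new - grad_old, update the inverse Hessian approximation B.
    rho = 1.0 / y.dot(s)   # requires the curvature condition y.s > 0
    I = np.eye(s.size)
    V = I - rho * np.outer(s, y)
    return V @ B @ V.T + rho * np.outer(s, s)
```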
Stochastic dual averaging methods using variance reduction techniques for regularized empirical risk minimization problems
We consider a composite convex minimization problem associated with regularized empirical risk minimization, which often arises in machine learning. We propose two new stochastic gradient methods based on the stochastic dual averaging method with variance reduction. Our methods generate sparser solutions than existing methods because we do not need to take the average of the history o...
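A hedged sketch of the variance-reduction device such methods combine with dual averaging (an SVRG-style estimator): correct the stochastic gradient at the current point with a stored snapshot, so the estimator stays unbiased while its variance shrinks as the iterates approach the snapshot. The names and calling convention here are assumptions for illustration.

```python
def vr_gradient(grad_i, i, x, x_snapshot, full_grad_snapshot):
    # grad_i(z, i) is the gradient of the i-th loss term at z;
    # full_grad_snapshot is the full gradient evaluated at x_snapshot.
    return grad_i(x, i) - grad_i(x_snapshot, i) + full_grad_snapshot
```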
Iterative Reweighted Singular Value Minimization
In this paper we study general ℓp regularized unconstrained matrix minimization problems. In particular, we first introduce a class of first-order stationary points for them, and we show that the first-order stationary points introduced in [11] for an ℓp regularized vector minimization problem are equivalent to those of an ℓp regularized matrix minimization reformulation. We also establish that...
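A hedged sketch of one iterative-reweighted step for an lp regularized matrix problem min_X f(X) + lam * sum_i sigma_i(X)^p with 0 < p < 1: linearize sigma^p at the current singular values to get weights, then apply weighted singular value thresholding after a gradient step on f. The step size, eps smoothing, and update rule are assumptions in the style of common reweighted schemes, not the paper's exact algorithm.

```python
import numpy as np

def reweighted_sv_step(X, grad_f, lam, p=0.5, step=1.0, eps=1e-8):
    # Gradient step on the smooth part, then weighted SV thresholding.
    U, s, Vt = np.linalg.svd(X - step * grad_f(X), full_matrices=False)
    weights = p * (s + eps) ** (p - 1.0)   # derivative of s^p at current s
    s_new = np.maximum(s - step * lam * weights, 0.0)
    return (U * s_new) @ Vt
```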
Journal: Comp. Opt. and Appl.
Volume 28, Issue -
Pages -
Published 2004